
    An infrastructure service recommendation system for cloud applications with real-time QoS requirement constraints

    The proliferation of cloud computing has revolutionized the hosting and delivery of Internet-based application services. However, with new cloud services and capabilities launched almost every month by both big companies (e.g., Amazon Web Services and Microsoft Azure) and small ones (e.g., Rackspace and Ninefold), decision makers (e.g., application developers and chief information officers) are likely to be overwhelmed by the choices available. The decision-making problem is further complicated by heterogeneous service configurations and application provisioning QoS constraints. To address this challenge, in our previous work we developed a semiautomated, extensible, ontology-based approach to infrastructure service discovery and selection based only on design-time constraints (e.g., renting cost, data center location, service features). In this paper, we extend our approach to include real-time (run-time) QoS (end-to-end message latency and end-to-end message throughput) in the decision-making process. Hosting next-generation applications in the domains of online interactive gaming, large-scale sensor analytics, and real-time mobile applications on cloud services necessitates optimizing such real-time QoS constraints to meet service-level agreements. To this end, we present a real-time QoS-aware multicriteria decision-making technique that builds on the well-known analytic hierarchy process (AHP) method. The proposed technique applies to selecting Infrastructure as a Service (IaaS) cloud offers, and it allows users to define multiple design-time and real-time QoS constraints or requirements. These requirements are then matched against our knowledge base to compute the best-fit combinations of cloud services at the IaaS layer. We conducted extensive experiments to demonstrate the feasibility of our approach.
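    The ranking step in the abstract builds on the analytic hierarchy process. As a rough illustration only (not the authors' implementation), the sketch below derives criterion weights from a pairwise comparison matrix and ranks hypothetical IaaS offers; the criteria, comparison values, and offer scores are all assumed for the example.

        # Minimal AHP sketch (illustrative assumptions, not the paper's code):
        # derive criterion weights from a pairwise comparison matrix, then rank
        # hypothetical IaaS offers by their weighted scores.
        import numpy as np

        def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
            """Criterion weights as the principal eigenvector of a pairwise
            comparison matrix (Saaty's eigenvector method)."""
            eigvals, eigvecs = np.linalg.eig(pairwise)
            principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
            return principal / principal.sum()

        # Pairwise comparisons among three assumed criteria: cost, latency, throughput.
        # Entry [i, j] expresses how much more important criterion i is than j.
        comparison = np.array([
            [1.0, 1/3, 1/2],   # cost
            [3.0, 1.0, 2.0],   # latency treated as the dominant real-time constraint
            [2.0, 1/2, 1.0],   # throughput
        ])
        weights = ahp_weights(comparison)

        # Hypothetical candidate offers, pre-normalised to [0, 1] per criterion
        # (higher is better, e.g. 1 - cost/max_cost, 1 - latency/max_latency).
        offers = {
            "offer_a": np.array([0.9, 0.4, 0.6]),
            "offer_b": np.array([0.5, 0.8, 0.7]),
        }
        ranked = sorted(offers.items(), key=lambda kv: kv[1] @ weights, reverse=True)
        for name, scores in ranked:
            print(name, round(float(scores @ weights), 3))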

    HoneyCode: Automating Deceptive Software Repositories with Deep Generative Models

    We propose HoneyCode, an architecture for generating synthetic software repositories for cyber deception. The synthetic repositories have the characteristics of real software, including language features, file names and extensions, but contain no real intellectual property. The fake software can be used as a honeypot or form part of a deceptive environment. Existing approaches to software repository generation lack scalability because they rely on hand-crafted structures for specific languages. Our approach is language agnostic and learns the underlying representations of repository structures, filenames and file content through a novel Tree Recurrent Network (TRN) and two recurrent networks (RNNs), respectively. Each stage of the sequential generation process utilises features from prior steps, which increases the honey repository’s authenticity and consistency. Experiments show that the TRN generates tree samples that reduce degree maximum mean discrepancy (MMD) by 90-92% and depth MMD by 75-86% relative to a held-out test data set, in comparison with recent deep graph generators and a baseline random tree generator. In addition, our RNN models generate convincing filenames with authentic syntax and realistic file content.
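    The degree and depth MMD figures refer to the maximum mean discrepancy between statistics of generated and held-out trees. The sketch below is a minimal illustration of how a degree MMD could be computed with a Gaussian kernel; the example trees, histogram size, and kernel bandwidth are assumptions, not the paper's evaluation code.

        # Minimal sketch (assumed values, not the paper's evaluation code):
        # squared MMD between node-degree distributions of generated vs. held-out
        # trees, using a Gaussian kernel over degree histograms.
        import numpy as np

        def degree_histogram(degrees, max_degree=16):
            """Turn a list of node degrees into a normalised histogram vector."""
            hist = np.zeros(max_degree)
            for d in degrees:
                hist[min(d, max_degree - 1)] += 1
            return hist / max(hist.sum(), 1)

        def gaussian_kernel(x, y, sigma=1.0):
            return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

        def mmd2(sample_a, sample_b, sigma=1.0):
            """Biased (V-statistic) estimate of squared MMD between two samples."""
            k = lambda xs, ys: np.mean([gaussian_kernel(x, y, sigma) for x in xs for y in ys])
            return k(sample_a, sample_a) + k(sample_b, sample_b) - 2 * k(sample_a, sample_b)

        # Each tree is summarised by the degree of every node (hypothetical samples).
        generated = [degree_histogram([3, 1, 1, 1, 2, 1]), degree_histogram([2, 2, 1, 1, 1])]
        held_out  = [degree_histogram([4, 1, 1, 1, 1]),    degree_histogram([2, 1, 1])]
        print("degree MMD^2:", round(mmd2(generated, held_out), 4))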

    CATTmew: Defeating Software-only Physical Kernel Isolation

    State-of-the-art rowhammer attacks can break MMU-enforced inter-domain isolation because the physical memory owned by different domains is adjacent. To mitigate these attacks, physical domain isolation, introduced by CATT, physically separates domains by dividing physical memory into multiple partitions and keeping each partition occupied by only one domain. CATT implemented physical kernel isolation as the first generic and practical software-only defense to protect the kernel from being rowhammered, the kernel being one of the most appealing targets. In this paper, we develop a novel exploit that effectively defeats physical kernel isolation and gains both root and kernel privileges. Our exploit works without exhausting the page cache or the system memory, and without relying on knowledge of the virtual-to-physical address mapping. The exploit is motivated by our key observation that modern OSes have double-owned kernel buffers (e.g., video buffers and SCSI Generic buffers) owned concurrently by the kernel and user domains. The existence of such buffers invalidates physical kernel isolation and makes rowhammer-based attacks possible again. Existing conspicuous rowhammer attacks that achieve root/kernel privilege escalation exhaust the page cache or even the whole system memory. Instead, we propose a new technique, named memory ambush, which places the hammerable double-owned kernel buffers physically adjacent to the target objects (e.g., page tables) using only a small amount of memory. As a result, our exploit is stealthier and has a smaller memory footprint. We also replace the inefficient rowhammer algorithm that blindly picks addresses to hammer with an efficient one: our algorithm selects suitable addresses based on an existing timing channel.
    Comment: Preprint of the work accepted at the IEEE Transactions on Dependable and Secure Computing 201
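    The address-selection step relies on the well-known rowhammer observation that two addresses in the same DRAM bank but different rows cause a row-buffer conflict and therefore a measurably higher access latency. The sketch below only illustrates that selection logic in general terms, not the paper's algorithm: the latency measurement is a hypothetical stub (a real implementation needs native code with cache flushes and cycle counters), and the threshold and address-bit layout are assumptions.

        # Illustrative sketch of timing-based hammer-address selection (general idea,
        # not the paper's algorithm). measure_pair_cycles is a hypothetical stand-in
        # for a native helper that alternates uncached accesses and times them.
        import random

        CONFLICT_THRESHOLD_CYCLES = 350   # assumed threshold, platform dependent

        def measure_pair_cycles(addr_a: int, addr_b: int) -> float:
            """Hypothetical stub: simulate higher latency when the two addresses
            share assumed 'bank' bits but differ in assumed 'row' bits."""
            same_bank = ((addr_a >> 13) & 0x7) == ((addr_b >> 13) & 0x7)
            diff_row = (addr_a >> 16) != (addr_b >> 16)
            base = 200 + random.uniform(-10, 10)
            return base + (180 if same_bank and diff_row else 0)

        def select_hammer_pairs(candidate_addrs, wanted=4):
            """Keep address pairs whose latency suggests a row-buffer conflict."""
            pairs = []
            for i, a in enumerate(candidate_addrs):
                for b in candidate_addrs[i + 1:]:
                    if measure_pair_cycles(a, b) > CONFLICT_THRESHOLD_CYCLES:
                        pairs.append((a, b))
                        if len(pairs) >= wanted:
                            return pairs
            return pairs

        candidates = [random.randrange(0, 1 << 30) & ~0xFFF for _ in range(64)]
        print(select_hammer_pairs(candidates))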